# Sparse Activation Parameters
## MoE-LLaVA-Qwen-1.8B-4e

- Organization: LanguageBind
- License: Apache-2.0
- Tags: Text-to-Image, Transformers

MoE-LLaVA is a large vision-language model built on a Mixture of Experts (MoE) architecture. It achieves efficient multimodal learning by activating only a sparse subset of its parameters for each input (a minimal sketch of this routing follows below).
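The "sparse activation" mentioned above refers to Mixture-of-Experts routing: a gating network picks a small number of experts per token, so only a fraction of the model's parameters run on any given input. The sketch below illustrates that idea in PyTorch under stated assumptions; the class name `SparseMoELayer`, the expert count, top-k value, and layer sizes are illustrative and do not come from the MoE-LLaVA codebase.

```python
# Minimal sketch of sparse expert activation (top-k MoE routing).
# Sizes and expert count are illustrative assumptions, not MoE-LLaVA's values.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseMoELayer(nn.Module):
    def __init__(self, d_model: int = 256, n_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # One small feed-forward network per expert.
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.GELU(),
                nn.Linear(4 * d_model, d_model),
            )
            for _ in range(n_experts)
        )
        # Router scores each token against every expert.
        self.router = nn.Linear(d_model, n_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model) -> flatten tokens for per-token routing.
        tokens = x.reshape(-1, x.size(-1))
        logits = self.router(tokens)                      # (n_tokens, n_experts)
        weights, indices = logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)              # normalize over chosen experts
        out = torch.zeros_like(tokens)
        # Only the selected ("activated") experts run for each token;
        # the remaining expert parameters stay idle -- the sparse activation.
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(tokens[mask])
        return out.reshape_as(x)


if __name__ == "__main__":
    layer = SparseMoELayer()
    demo = torch.randn(2, 8, 256)   # (batch, seq_len, d_model)
    print(layer(demo).shape)        # torch.Size([2, 8, 256])
```

With 2 of 4 experts selected per token in this sketch, roughly half of the expert parameters are active on any forward pass, which is how MoE designs keep per-token compute low while scaling total parameter count.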